
    A Hierarchical Bayesian Trust Model based on Reputation and Group Behaviour

    In many systems, agents must rely on their peers to achieve their goals. However, when trusted to perform an action, an agent may betray that trust by not behaving as required. Agents must therefore estimate the behaviour of their peers, so that they can identify reliable interaction partners. To this end, we present a Bayesian trust model (HABIT) for assessing trust based on direct experience and (potentially unreliable) reputation. Although existing approaches claim to achieve this, most rely on heuristics with little theoretical foundation. In contrast, HABIT is based on principled statistical techniques; it can be used with any representation of behaviour, and it can assess trust based on observed similarities between groups of agents. In this paper, we describe the theoretical aspects of the model and present experimental results in which HABIT was shown to be up to twice as accurate at predicting trustee performance as an existing state-of-the-art trust model.
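    HABIT itself is representation-agnostic and the paper gives the full hierarchical model; the sketch below is only a rough illustration of the hierarchical idea using a Beta-Bernoulli instance. The method-of-moments fit of a shared group prior is an assumption of this sketch, not the paper's inference procedure; direct experience with a particular trustee then updates that prior.

    ```python
    # Illustrative sketch only: outcome histories (1 = fulfilled, 0 = defected)
    # of agents in the same group inform a shared Beta prior; direct experience
    # with one trustee then updates it to give a posterior trust estimate.

    def fit_group_prior(group_histories):
        """Fit Beta(a, b) to the success rates of a group of agents
        by matching the sample mean and variance (method of moments)."""
        rates = [sum(h) / len(h) for h in group_histories if h]
        mean = sum(rates) / len(rates)
        var = sum((r - mean) ** 2 for r in rates) / len(rates)
        if var <= 0 or var >= mean * (1 - mean):
            return 1.0, 1.0  # degenerate sample: fall back to a uniform prior
        common = mean * (1 - mean) / var - 1
        return mean * common, (1 - mean) * common

    def trust_estimate(prior, direct_outcomes):
        """Posterior mean probability that the trustee behaves as required."""
        a, b = prior
        successes = sum(direct_outcomes)
        failures = len(direct_outcomes) - successes
        return (a + successes) / (a + b + successes + failures)

    # Histories of three peers in the trustee's group set the prior;
    # three direct interactions with a new trustee refine it.
    group = [[1, 1, 1, 0], [1, 1, 0, 1], [1, 1, 1, 1]]
    prior = fit_group_prior(group)
    print(trust_estimate(prior, [1, 0, 1]))  # approx. 0.79
    ```

    The point of the hierarchy is visible even in this toy form: with few or no direct observations, the estimate is dominated by the group-level prior, and it shifts towards the trustee's own record as evidence accumulates.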

    Agent-based virtual organisations for the Grid

    The ability to create reliable, scalable virtual organisations (VOs) on demand in a dynamic, open and competitive environment is one of the challenges that underlie Grid computing. In response, in the CONOISE-G project, we are developing an infrastructure to support robust and resilient virtual organisation formation and operation. Specifically, CONOISE-G provides mechanisms to assure the effective operation of agent-based VOs in the face of disruptive and potentially malicious entities in dynamic, open and competitive environments. In this paper, we describe the CONOISE-G system, outline its use in VO formation and perturbation, and review current work on dealing with unreliable information sources.

    Coping with Inaccurate Reputation Sources: Experimental Analysis of a Probabilistic Trust Model

    This research aims to develop a model of trust and reputation that will ensure good interactions amongst software agents in large-scale open systems. The following are key drivers for our model: (1) agents may be self-interested and may provide false accounts of their experiences with other agents if it is beneficial for them to do so; and (2) agents will need to interact with other agents with which they have little or no past experience. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS), which models an agent's trust in an interaction partner. Specifically, trust is calculated using probability theory, taking into account past interactions between agents. When there is a lack of personal experience between agents, the model draws upon reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.
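    As a minimal sketch of the probabilistic core the abstract describes, the snippet below estimates trust as the expected value of a beta distribution over binary interaction outcomes, pooling direct experience with third-party reports. The fixed discount weight is a placeholder assumption standing in for TRAVOS's actual handling of inaccurate reputation, which estimates each reporter's reliability before weighting its reports.

    ```python
    # Sketch: beta-distribution trust over binary outcomes (success/failure),
    # with reputation reports pooled as additional, discounted counts.

    def beta_trust(successes, failures):
        """Expected success probability under Beta(successes+1, failures+1),
        i.e. a uniform prior updated with the observed outcome counts."""
        return (successes + 1) / (successes + failures + 2)

    def trust_with_reputation(direct, reports, discount=0.5):
        """Pool direct (successes, failures) counts with third-party reports.
        The uniform discount is an assumption of this sketch; TRAVOS itself
        weights each report by an estimate of the reporter's accuracy."""
        s, f = direct
        for rep_s, rep_f in reports:
            s += discount * rep_s
            f += discount * rep_f
        return beta_trust(s, f)

    # Two direct interactions (one success, one failure) plus two reports.
    print(trust_with_reputation((1, 1), [(4, 0), (3, 1)]))  # approx. 0.69
    ```

    This formulation makes the abstract's two drivers concrete: an agent with no direct experience falls back on (discounted) reputation, while a self-interested reporter's false counts can only shift the estimate in proportion to the weight it is given.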